Scientific collaboration potential prediction based on dynamic heterogeneous information fusion
Guoshuai MA, Yuhua QIAN, Yayu ZHANG, Junxia LI, Guoqing LIU
Journal of Computer Applications    2023, 43 (9): 2775-2783.   DOI: 10.11772/j.issn.1001-9081.2022081266

In existing scientific collaboration potential prediction methods, feature engineering is used to manually extract the shallow, static attributes of authors in scientific collaboration networks, while the relationships among heterogeneous entities in these networks are ignored. To address these shortcomings, a dynamic Collaboration Potential Prediction (CPP) model was proposed to incorporate the latent attribute information of multiple entities in scientific collaboration networks. In this model, the structural features of scholar-scholar collaboration relationships were considered while extracting the attributes of heterogeneous entities, and the model was jointly optimized to predict scientific collaboration potential while recommending collaborators to scholars. To verify the effectiveness of the proposed model, the information of more than 500 000 papers published in China Computer Federation (CCF)-recommended journals, together with the complete attribute information of the related entities, was collected and collated. Temporal collaborative heterogeneous networks for different periods were then constructed by the sliding-window method to extract the dynamic attribute information of each entity during the evolution of the scientific collaboration network. In addition, to improve the generalization and practicality of the proposed model, data from different periods were fed to the model in random order during training. Experimental results show that, compared with the second-best model, Graph SAmple and aggreGatE network (GraphSAGE), the CPP model improves the classification accuracy on the collaborator recommendation task by 1.47 percentage points; on the collaboration potential prediction task, the test error of CPP is 1.23% lower than that of GraphSAGE. In conclusion, the CPP model can recommend high-quality collaborators for scholars more accurately.
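As a rough illustration only (not the paper's implementation), the sliding-window construction of temporal collaboration networks described above might be sketched as follows; the `papers` record layout, `window`, and `step` parameters are assumptions for the example:

```python
from collections import defaultdict

def sliding_windows(papers, window=3, step=1):
    """Group papers into overlapping time windows (sliding-window method).

    Each window spans `window` consecutive years; consecutive windows
    start `step` years apart.
    """
    years = sorted({p["year"] for p in papers})
    windows = []
    for start in range(years[0], years[-1] - window + 2, step):
        span = range(start, start + window)
        windows.append([p for p in papers if p["year"] in span])
    return windows

def build_collab_network(papers):
    """Build a weighted co-authorship network from one window of papers.

    Edges are author pairs; the weight counts their joint papers.
    """
    edges = defaultdict(int)
    for p in papers:
        authors = sorted(p["authors"])
        for i in range(len(authors)):
            for j in range(i + 1, len(authors)):
                edges[(authors[i], authors[j])] += 1
    return edges
```

Running `build_collab_network` on each window produced by `sliding_windows` yields one snapshot network per period, from which per-entity dynamic attributes can be extracted.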

Multi-depth-of-field 3D shape reconstruction with global spatio-temporal feature coupling
Jiangfeng ZHANG, Tao YAN, Bin CHEN, Yuhua QIAN, Yantao SONG
Journal of Computer Applications    2023, 43 (3): 894-902.   DOI: 10.11772/j.issn.1001-9081.2022101589

In response to the inability of existing 3D shape reconstruction models to effectively fuse global spatio-temporal information, a Depth Focus Volume (DFV) module was proposed to retain the transition information between focus and defocus. On this basis, a Global Spatio-Temporal Feature Coupling (GSTFC) model was proposed to extract the local and global spatio-temporal features of multi-depth-of-field image sequences. Firstly, 3D-ConvNeXt modules and 3D convolutional layers were interspersed in the contracting path to capture multi-scale local spatio-temporal features, while a 3D-SwinTransformer module was added to the bottleneck to capture the global correlations among those local features. Then, the local spatio-temporal features and global correlations were fused into global spatio-temporal features through an adaptive parameter layer and fed into the expanding path to guide the generation of the focus volume. Finally, the sequence weight information of the focus volume was extracted by the DFV module, retaining the focus-defocus transition information, to obtain the final depth map. Experimental results show that GSTFC decreases the Root Mean Square Error (RMSE) by 12.5% on the FoD500 dataset compared with the state-of-the-art All-in-Focus Depth Net (AiFDepthNet) model, and retains more depth-of-field transition relationships than the traditional Robust Focus Volume Regularization in Shape from Focus (RFVR-SFF) model.
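To make the idea of extracting sequence weight information from a focus volume concrete, here is a minimal per-pixel sketch, assuming a soft-argmax over focus scores (a common depth-from-focus formulation, not necessarily the DFV module's exact mechanism); `focus_scores` and `depths` are hypothetical inputs:

```python
import math

def soft_depth(focus_scores, depths):
    """Soft-argmax depth estimate for one pixel.

    Instead of picking only the single sharpest focal slice (hard argmax),
    every slice's depth is weighted by a softmax of its focus score, so the
    gradual transition between focus and defocus is preserved in the result.
    """
    m = max(focus_scores)                      # subtract max for stability
    w = [math.exp(s - m) for s in focus_scores]
    z = sum(w)
    return sum(wi / z * d for wi, d in zip(w, depths))
```

With a sharply peaked focus profile this reduces to the depth of the sharpest slice; with a flatter profile, neighboring slices pull the estimate smoothly between focal planes, which is why transition information survives.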
